Search Results: "mlt"

17 August 2012

Petter Reinholdtsen: Half way there with translated docbook version of Free Culture

In my spare time, I am currently working on a Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with copyright law that I can give to my parents and others who are reluctant to read an English book. It is a marvellous set of examples of how the ever-expanding copyright regulations hurt culture and society. When the translation is done, I hope to find funding to print and ship a copy to all the members of the Norwegian parliament before they sit down to debate the latest revisions to the Norwegian copyright law. This summer I called for volunteers to help me, and I have been able to secure the valuable contribution of at least one other Norwegian. Two days ago, we finally broke the 50% mark: more than half of the strings to translate (normally paragraphs, but titles and index entries are counted too) are done. All parts from the beginning up to and including chapter four are translated, as are chapters six and seven, and the conclusion. I created a graph to show the progress. The number of strings to translate increases as I insert the index entries into the docbook; they were missing from the docbook version I initially started with. There are still quite a few index entries missing, but all the ones starting with A, B, O, Z and Y are done. I am currently focusing on completing the index entries, to get a complete English version of the docbook source. There is still a need for translators, and for people with docbook knowledge who can help produce a good-looking book (I still struggle with dblatex, xmlto and docbook-xsl), as well as for the draft translation and proofreading. I would also like the figures to be redrawn as SVGs to make them easy to translate. Any SVG master around? I am sure there are some legal terms that are unfamiliar to me. If you want to help, please get in touch, and check out the project files currently available from github. If you are curious what the translated book currently looks like, the updated PDF and EPUB are published on github. The HTML version is published as well, but github hands it out with MIME type text/plain, confusing browsers, so I saw no point in linking to that version.

10 August 2012

Petter Reinholdtsen: Notes on language codes for Norwegian docbook processing...

In docbook one can specify the language used at the top, and the processing pipeline will use this information to pick the correct translations for 'chapter', 'see also', 'index' etc. For most languages used with docbook, I guess this works just fine. For example, a German user can start the document with <book lang="de">, and the document will show up with the correct content with any of the docbook processors. This is not the case for the language I am working with at the moment, Norwegian Bokmål. For a while, I was confused about which language code to use, because I was unable to find any language code that would work across all tools. I am currently testing dblatex, xmlto, docbook-xsl and dbtoepub, and they do not handle Norwegian Bokmål the same way; some of them do not handle it at all. A bit of background information is probably needed to understand this mess. Norwegian has not one, but two written variants: Norwegian Nynorsk and Norwegian Bokmål. There are three two-letter language codes associated with these languages: Norwegian is 'no', Norwegian Nynorsk is 'nn' and Norwegian Bokmål is 'nb'. Historically the 'no' language code was used for Norwegian Bokmål, but many years ago this was found to be a bad idea, and the recommendation is to use the most specific language code instead, to avoid confusion. In the transition period it is a good idea to make sure 'no' is an alias for 'nb'. Back to the docbook processing tools in Debian. The dblatex tool only understands 'nn'. There are translations for 'no', but not 'nb' (BTS #684391), and due to a bug (BTS #682936) the 'no' language code is not recognised. The docbook-xsl tool chain only recognises 'nn' and 'nb', but not 'no'. The xmlto tool likewise only recognises 'nn' and 'nb', but not 'no'. The end result is that there is no language code I can use to get the docbook file working with all of these tools at the same time. :( The correct solution is to use <book lang="nb">, but it will take time before that works with all the free software docbook processors. :( Oh, the joy of well integrated tools. :/

31 July 2012

Petter Reinholdtsen: Best way to create a docbook book?

I tried to send this text to the docbook-apps mailing list at lists.oasis-open.org, but it only accepts messages from subscribers and rejected my post, and I completely lack the bandwidth required to subscribe to yet another mailing list, so instead I post my message here and hope my blog readers can help me out. I am quite new to docbook processing, and am climbing a steep learning curve at the moment. To give you some background, I am working on a Norwegian translation of the book Free Culture by Lawrence Lessig, and I use docbook to handle the process. The files to build the book are available from github. The book has around 400 pages with parts, images, footnotes, tables, index entries etc., which has proven to be a challenge for the free software docbook processors. My build platform is Debian GNU/Linux Squeeze. I want to build PDF, EPUB and HTML versions of the book, and have tried different tool chains to do the conversion from docbook to these formats. I am currently focusing on the PDF version, and have a few problems. So I wonder: what would be the best way to create the PDF version of this book? Are some of the bugs mentioned above solved in new or experimental versions of some docbook tool chain? What about the HTML and EPUB versions?

30 July 2012

Johannes Schauer: port bootstrap build-ordering tool report 4

A copy of this post is sent to soc-coordination@lists.alioth.debian.org as well as to debian-bootstrap@lists.mister-muffin.de.

Diary

July 2
  • playing around with syntactic dependency graphs and how to use them to flatten dependencies

July 4
  • make it work with dose 3.0.2
  • add linux-amd64 to source architectures
  • remove printing in build_compile_rounds
  • catch Not_found exception and print warning
  • use the whole installation set in crosseverything.ml instead of flattened dependencies
  • detect infinite loop and quit in crosseverything.ml
  • use globbing in _tags file
  • use wildcards and patsubst in makefile

July 5
  • throw a warning if there exist binary packages without source packages
  • add string_of_list and string_of_pkglist and adapt print_pkg_list and print_pkg_list_full to use them
  • fix and extend flatten_deps - now also tested with Debian Sid

July 6
  • do not exclude the crosscompiled packages from being compiled in crosseverything.ml
  • clean up basebuildsystem.ml, remove old code, use BootstrapCommon code
  • clean up basenocycles.ml, remove unused code and commented out code
  • add option to print statistics about the generated dependency graph
  • implement most_needed_fast_wrong as well as most_needed_slow_correct and make both available through the menu

July 7
  • allow investigating all sccs, not only the full graph and the scc containing the investigated package
  • handle Not_found in src_list_from_bin_list with warning message
  • handle the event of the whole archive actually being buildable
  • replace raise Failure with failwith
  • handle incorrectly typed package names
  • add first version of reduced_dist.ml to create a self-contained mini distribution out of a big one

July 8
  • add script to quickly check for binary packages without source package
  • make Debian Sid default in makefile
  • add *.d.byte files to .gitignore
  • README is helpful now
  • more pattern matching and recursiveness everywhere

July 9
  • fix termination condition of reduced_dist.ml
  • have precise as default ubuntu distribution
  • do not allow investigating an already installable package

July 10
  • milestone: show all cycles in a graph
  • add copyright info (LGPL3+)

July 11
  • advice to use dose tools in README

July 16
  • write apt_pkg based python filter script replacing grep-dctrl

July 17
  • use Depsolver.listcheck more often
  • add dist_graph.ml
  • refactor dependency graph code into its own module

July 18
  • improve package selection for reduced_dist.ml
  • improve performance of cycle enumeration code

July 20
  • implement buildprofile support into dose3

July 22
  • let dist_graph.ml use commandline arguments

July 23
  • allow dose3 to generate source package lists without Build-Depends-Indep and Build-Conflicts-Indep

July 29
  • implement crosscompile support into dose3

Results

Readme There is not yet a writeup on how everything works and how all the pieces of the code work together, but the current README file provides a short introduction on how to use the tools. It covers:
  • build and runtime dependencies
  • compile instructions
  • execution examples for each program
  • step by step guide how to analyze the dependency situation
  • explanation of general commandline options
A detailed writeup about the inner workings of everything will be part of a final documentation stage.

License All my code is now released under the terms of the LGPL, either version 3, or (at your option) any later version. A special linking exception is made in the license, which can be read at the top of the provided COPYING file. The exception is necessary because OCaml links statically, which means that without it, the conditions of distribution would effectively equal GPL3+.

reduced_dist.ml The Debian archive in particular is huge, and one might want to work on a reduced selection of packages first. A smaller selection of the archive would be significantly faster to process and would also not pull in thousands of packages that are not important for an extended base system. I call a reduced distribution a set of source packages A and a set of binary packages B which fulfill the following properties:
  • all source packages A must be buildable with only binary packages B being available
  • all binary packages B except for architecture:all packages must be buildable from source packages A
The set of binary packages B and source packages A can be retrieved using the reduced_dist program. It can either build the most minimal reduced distribution or one that includes a certain package selection. To filter the package control stanzas for a reduced distribution out of a full distribution, I originally used a call to grep-dctrl, but later replaced that with a custom python script called filter-packages.py. This script uses python-apt to filter Packages and Sources files for a certain package selection.
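
To make the filtering step concrete, here is a rough python-apt sketch in the spirit of filter-packages.py; the function name and the package selection are made up for this example, and the real script may well differ:

import apt_pkg  # from python-apt

apt_pkg.init()

def select_stanzas(path, wanted):
    # walk a Packages or Sources file stanza by stanza and keep
    # only those stanzas whose Package field is in the selection
    kept = []
    with open(path) as f:
        for stanza in apt_pkg.TagFile(f):
            if stanza["Package"] in wanted:
                kept.append(dict((k, stanza[k]) for k in stanza.keys()))
    return kept

# hypothetical usage:
# for s in select_stanzas("Packages", set(["dpkg", "gcc-4.7"])):
#     print s["Package"], s["Version"]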

dist_graph.ml It soon became obvious that there were not many independent dependency cycle situations, but just one big scc that contained 96% of the packages involved in build dependency cycles. Therefore it made sense to write a program that does not iteratively build the dependency graph starting from a single package, but instead builds the dependency graph for a whole archive.

Cycles I can now enumerate all cycles in the dependency graph. I covered the theoretical part in another blog post and wrote an email about the achievement to the list. Both resources contain more links to the respective source code. The dependency graph generated for Debian Sid has 39486 vertices. It has only one central scc with 1027 vertices, and only eight other sccs with 2 to 7 vertices each. All the other source and binary packages in the dependency graph for the archive are degenerate components of length one. Obtaining the attached result took 4 hours on my machine (Core i5 @ 2.53GHz): 1.5 hours were needed to build the dependency graph, and the other 2.5 hours to run Johnson's algorithm on the result. Memory consumption of the program was about 700 MB. It is to my joy that both the runtime of the cycle finding algorithm for a whole Debian Sid repository and its memory requirements are apparently within orders of magnitude that are justifiable on off-the-shelf hardware. It must also be noted that nothing is optimized for performance yet. A list of all cycles in Debian Sid up to length 4 can be retrieved from this email. This cycle analysis assumes that only essential packages, build-essential and its dependencies, and debhelper are available. Debhelper is not an essential or build-essential package, but 79% of the archive build-depends on it. The most interesting cycles are probably those of length 2 that need packages that they build themselves. Noticeable examples of these situations are vala, python, mlton, fpc, sbcl and ghc. Languages seem to love needing themselves to be built.
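
To give a feel for what cycle enumeration does, here is a toy illustration using the networkx Python library, whose simple_cycles() implements Johnson's algorithm. This is just a sketch, not the project's actual OCaml code, and the package names and edges below are invented:

import networkx as nx

# toy directed graph: an edge A -> B means "building A needs B"
G = nx.DiGraph()
G.add_edges_from([
    ("ghc", "ghc"),          # a compiler that needs itself to be built
    ("python", "gdb"),       # invented two-cycle for illustration
    ("gdb", "python"),
    ("a", "b"), ("b", "c"),  # a chain without any cycle
])

for cycle in nx.simple_cycles(G):
    print cycle
# e.g. ['ghc'] and ['python', 'gdb']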

Buildprofiles There is a long discussion about how to encode staged build dependency information in source packages. While the initial idea was to use Build-Depends-StageN fields, this solution would duplicate large parts of the Build-Depends field, which leads to bitrot, and it is inflexible with respect to other possible build "profiles". To remedy the situation it was proposed to use field names like Build-Depends[stage1 embedded], but this would also duplicate information and would break the rfc822 format of package description files. A document maintained by Guillem Jover gives even more ideas and details. Internally, Patrick and I settled on another idea of Guillem Jover's to annotate staged build dependencies. The format reads like:
Build-Depends: huge (>= 1.0) [i386 arm] <!embedded !bootstrap>, tiny
So each build profile follows a dependency in <> "brackets" and has a format similar to architecture options. Patrick has a patch for dpkg that implements this functionality, while I patched dose3.
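
To make the syntax concrete, here is a toy Python parser for such annotated dependencies. It is purely illustrative (the real implementations live in Patrick's dpkg patch and my dose3 patch), and the regular expression only covers the fields shown above:

import re

# name, optional (version), optional [arches], optional <profiles>
DEP_RE = re.compile(
    r'(?P<name>[a-z0-9][a-z0-9+.-]*)'
    r'(?:\s*\((?P<version>[^)]+)\))?'
    r'(?:\s*\[(?P<arches>[^]]+)\])?'
    r'(?:\s*<(?P<profiles>[^>]+)>)?')

def parse_build_depends(field):
    deps = []
    for raw in field.split(','):
        m = DEP_RE.match(raw.strip())
        if m:
            d = m.groupdict()
            d["arches"] = d["arches"].split() if d["arches"] else []
            d["profiles"] = d["profiles"].split() if d["profiles"] else []
            deps.append(d)
    return deps

print parse_build_depends('huge (>= 1.0) [i386 arm] <!embedded !bootstrap>, tiny')
# [{'name': 'huge', 'version': '>= 1.0', 'arches': ['i386', 'arm'],
#   'profiles': ['!embedded', '!bootstrap']},
#  {'name': 'tiny', 'version': None, 'arches': [], 'profiles': []}]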

Dropping Build-Depends-Indep and Build-Conflicts-Indep When representing the dependencies of a source package, dose3 concatenates its Build-Depends and Build-Depends-Indep dependency information. So up to now, a source package could only be compiled if it managed to compile all of its binary packages, including architecture:all packages. But when bootstrapping a new architecture, it should be sufficient to only build the architecture-dependent packages, and therefore to only run the build-arch target in debian/rules and not the build-indep target. Considering only the Build-Depends field and dismissing the Build-Depends-Indep field reduced the main scc from 1027 vertices to 979 vertices. The number of cycles up to length four dropped from 276 to 206. Especially the cycles containing gtk-doc-tools, doxygen, debiandoc-sgml and texlive-latex-base became much rarer. Patrick managed to add a Build-Depends-Indep field to four packages so far, which reduced the scc by a further 14 vertices, down to 965. So besides staged build dependencies and cross building there is now a third method that can be applied to break dependency cycles: add Build-Depends-Indep information to packages, or update existing information. I submitted a list of packages that have a binary-indep and/or a build-indep target in their debian/rules to the list. I also submitted a patch for dose3 that makes it possible to ignore Build-Depends-Indep and Build-Conflicts-Indep information.

Dose3 crossbuilding So far I have only looked at dependency situations in the native case. While the native case contains a huge scc of about 1000 packages, the dependency situation will be much nicer when cross building. But dose3 was so far not able to simulate cross building of source packages. I wrote a patch that implements this functionality and will allow me to write programs that help analyze the cross situation as well.

Debconf Presentation Wookey gave a talk at debconf12 for which I supplied him with slides. The slides in their final version can be downloaded here.

Future Patrick maintains a list of "weak" build dependencies. Those are dependencies that are very likely to be droppable either in a staged build or by using Build-Depends-Indep. I should make use of this list to make it easier to find packages that can easily be stripped of some of their dependencies. I will have to implement support for resolving the main scc using staged build dependencies. Since it is unlikely that Patrick will be fast enough in supplying me with modified packages, I will need to create a database of dummy packages myself. Another open task is to enable analysis of the crossbuilding dependency situation. What I'm currently more or less waiting on is the inclusion of my patches into dose3, as well as a decision on the buildprofile format. More people need to discuss it before it can be included into tools as well as policy. Every maintainer of a package can help make bootstrapping easier by making sure that as many dependencies as possible are part of the Build-Depends-Indep field.

21 May 2012

Johannes Schauer: sisyphus wins ICRA 2012 VMAC

Sisyphus is a piece of software that I wrote as a member of a team from Jacobs University led by Prof. Dr. Andreas Nüchter. It managed to place our team first in this year's IEEE ICRA 2012 Virtual Manufacturing Automation Competition in all three rounds. The goal was to stack a given set of boxes of different length, height and width on a pallet in a way that achieved optimal volume utilization, center of mass and interlock of the boxes. Besides the cartesian placement of a box on the pallet, the only other degree of freedom was a 90° rotation of the box around a vertical axis. Since the arrangement of boxes into a three dimensional container is NP-hard (three dimensional orthogonal knapsack), I decided on a heuristic for an approximate solution. The premise is that there are many boxes of equal height, which was the case in the test cases that were available from the 2011 VMAC. Given this premise, my heuristic was to arrange the boxes into layers of equal height and then stack these layers on top of each other. A set of boxes that would be left over, or too few from the start to form its own full layer, would then be stacked on top of the completed layers. There is a video of how this looked. My code is now online on github and it is even documented for everybody who is not me (or a potential future me, of course). This blog post is about the "interesting" parts of sisyphus. You can read about the overall workings of it in the project's README.

Python dict to XML and XML to Python dict

The evaluation program for the challenge reads XML files, and the pallet size and the list of articles with their sizes are also given in XML format. So I had to have a way to easily read article information from XML and to easily dump my data into XML format. Luckily, none of the XML involved made use of XML attributes at all, so the only children a node had were other nodes. Thus, the whole XML file could be represented as a dictionary with keys being tag names and the values being other dictionaries, lists, strings or integers. The code doing that uses xml.etree.ElementTree and turns out to be very simple:
from xml.etree import ElementTree

def xmltodict(element):
    def xmltodict_handler(parent_element):
        result = dict()
        for element in parent_element:
            if len(element):
                obj = xmltodict_handler(element)
            else:
                obj = element.text
            if result.get(element.tag):
                if hasattr(result[element.tag], "append"):
                    result[element.tag].append(obj)
                else:
                    result[element.tag] = [result[element.tag], obj]
            else:
                result[element.tag] = obj
        return result
    return {element.tag: xmltodict_handler(element)}

def dicttoxml(element):
    def dicttoxml_handler(result, key, value):
        if isinstance(value, list):
            for e in value:
                dicttoxml_handler(result, key, e)
        elif isinstance(value, basestring):
            elem = ElementTree.Element(key)
            elem.text = value
            result.append(elem)
        elif isinstance(value, int) or isinstance(value, float):
            elem = ElementTree.Element(key)
            elem.text = str(value)
            result.append(elem)
        elif value is None:
            result.append(ElementTree.Element(key))
        else:
            res = ElementTree.Element(key)
            for k, v in value.items():
                dicttoxml_handler(res, k, v)
            result.append(res)
    result = ElementTree.Element(element.keys()[0])
    for key, value in element[element.keys()[0]].items():
        dicttoxml_handler(result, key, value)
    return result

def xmlfiletodict(filename):
    return xmltodict(ElementTree.parse(filename).getroot())

def dicttoxmlfile(element, filename):
    ElementTree.ElementTree(dicttoxml(element)).write(filename)

def xmlstringtodict(xmlstring):
    return xmltodict(ElementTree.fromstring(xmlstring))

def dicttoxmlstring(element):
    return ElementTree.tostring(dicttoxml(element))
Let's try this out:
>>> from util import xmlstringtodict, dicttoxmlstring
>>> xmlstring = "<foo><bar>foobar</bar><baz><a>1</a><a>2</a></baz></foo>"
>>> xmldict = xmlstringtodict(xmlstring)
>>> print xmldict
{'foo': {'baz': {'a': ['1', '2']}, 'bar': 'foobar'}}
>>> dicttoxmlstring(xmldict)
'<foo><baz><a>1</a><a>2</a></baz><bar>foobar</bar></foo>'
The dict container doesn't preserve order, but as XML doesn't require that, this is not an issue.

Arranging items in layers

Once it was decided that I wanted to take the layered approach, the 3D knapsack problem boiled down to a 2D knapsack problem. The problem statement now was: how to best fit small rectangles into a big rectangle? I decided on a simple and fast approach as it is explained in Jake Gordon's blog article. There is a demo of his code, and should the site vanish from the net, the code is on github. This solution seemed to generate results that were "good enough" while being simple to implement and fast to execute. If you look very hard, you can still see some design similarities between my arrange_spread.py and his packer.js code. Jake Gordon got his idea from Jim Scott, who wrote an article about arranging randomly sized lightmaps into a bigger texture. There is also an ActiveState Code recipe from 2005 which looks very similar to the code by Jake Gordon. The posts of Jake Gordon and Jim Scott explain the solution well, so I don't have to repeat it. Should the above resources go offline, I made backups of them here and here. There is also a backup of the ActiveState piece here.

Spreading items out

The algorithm above would cram all rectangles into a top-left position. As a result, there would mostly be space at the bottom and the left edge of the available pallet surface. This is bad for two reasons:
  1. the mass is distributed unequally
  2. articles on the layer above at the bottom or left edge, are prone to overhang too much so that they tumble down
Instead, all articles should be spread over the available pallet area, creating small gaps between them instead of big spaces at the pallet borders. Since articles were of different sizes, it was not clear to me from the start what "equal distribution" would even mean, because it was obvious that it was not as simple as making the space between all rectangles equal. The spacing had to differ between them to accommodate differently sized boxes. The solution I came up with made use of the tree structure that was built by the algorithm that arranged the rectangles in the first place. The idea is to spread articles vertically first, recursively starting with the deepest nodes and spreading them out in their parent rectangle, and then to spread them horizontally, spreading the upper nodes first and recursively resizing and spreading child nodes. The whole recursion idea created problems of its own. One of the nicest recursive beauties is the following function:
def get_max_horiz_nodes(node):
    if node is None or not node['article']:
        return [], []
    elif node['down'] and node['down']['article']:
        rightbranch, sr = get_max_horiz_nodes(node['right'])
        rightbranch = [node] + rightbranch
        downbranch, sd = get_max_horiz_nodes(node['down'])
        ar = rightbranch[len(rightbranch)-1]['article']
        ad = downbranch[len(downbranch)-1]['article']
        if ar['x']+ar['width'] > ad['x']+ad['width']:
            return rightbranch, sr+[downbranch[0]]
        else:
            return downbranch, sd+[rightbranch[0]]
    else:
        rightbranch, short = get_max_horiz_nodes(node['right'])
        return [node] + rightbranch, short
get_max_horiz_nodes() traverses all branches of the tree that node has below itself and returns a tuple containing the list of nodes that form the branch that stretches out widest, plus the list of nodes that are in the other branches (which are shorter than the widest). Another interesting problem was how to decide on the gap between articles. This was interesting because the number resulting from subtracting the sum of the article lengths (or widths) from the available length (or width) was mostly not divisible by the number of gaps without leaving a rest. So there had to be an algorithm that gives me a list of integers, none of them differing by more than one from any other, that when summed up would give me the total amount of empty space. Or in other words: how to divide a number m into n integer pieces such that none of those integers differs by more than 1 from any other. Surprisingly, generating this list doesn't require any complex loop constructs:
>>> m = 108 # total amount
>>> n = 7 # number of pieces
>>> d,e = divmod(m, n)
>>> pieces = (e)*[(d+1)]+(n-e)*[d]
>>> print pieces
[16, 16, 16, 15, 15, 15, 15]
>>> sum(pieces) == m
True
>>> len(pieces) == n
True
You can test out the algorithms that arrange rectangles and spread them out by cloning the git and then running:
PYTHONPATH=. python legacy/arrange_spread.py
The results will be the svg files test1.svg and test2.svg, the latter showing the spread-out result. Here is an example of how the output looks (without the red border which is drawn to mark the pallet border). arrange_spread2.py contains an adaption of arrange_spread.py for the actual problem.

Permutations without known length

When creating a layer out of articles of the same height, there are four strategies that I can choose from. It is four because there are two methods that I can either use or not: I can rotate the articles by 90° by default or not, and I can rotate the pallet or not. So every time I build a new layer, there are those four options. Depending on which strategy I choose, there is a different number of possible leftover articles that did not fit into any layer. The number differs because each strategy is more or less efficient. To try out all combinations of possible layer arrangements, I have to walk through a tree where at each node I branch four times, once for each individual strategy. Individual neighboring nodes might be the same, but this outcome is unlikely because the paths leading to those neighboring nodes differ. To simplify, let's name the four possible strategies for each layer 0, 1, 2 and 3. I now want an algorithm that enumerates all possible permutations of those four numbers for "some" length. This is similar to counting, and the itertools module comes with the product() function that nearly does what I want. For example, should I know that my tree does not become deeper than 8 (read: no more than eight layers will be built), then I can just run:
>>> for i in itertools.product([0,1,2,3], repeat=8):
...     print i
...
(0,0,0,0,0,0,0,0)
(0,0,0,0,0,0,0,1)
(0,0,0,0,0,0,0,2)
(0,0,0,0,0,0,0,3)
(0,0,0,0,0,0,1,0)
(0,0,0,0,0,0,1,1)
(0,0,0,0,0,0,1,2)
(0,0,0,0,0,0,1,3)
This would work if the number of layers created with each strategy was the same. But as each strategy behaves differently depending on the input, it cannot be known before actually trying out a sequence of strategies how many layers it will yield. The strategy (0,0,0,0,0,0,0,0) might create 7 layers, resulting in (0,0,0,0,0,0,0,1), (0,0,0,0,0,0,0,2) and (0,0,0,0,0,0,0,3) yielding the same output, as only the first 7 strategies count. This would create duplicates which I should not waste cpu cycles on later. It might also be that (0,0,0,0,0,0,1,0) turns out to be a combination of strategies that creates more than 8 layers, in which case the whole thing fails. So what I need is a generator that gives me a new strategy depending on how often it is asked for one. It should dynamically extend the tree of possible permutations to accommodate any size. Since the tree will become humongous (4^11 = 4194304), already traversed nodes should automatically be cleaned up so that only the nodes that make up the current list of strategies stay in memory at any point in time. This all sounded complicated, which made me even more surprised by how simple the solution was in the end. Here is a version of the algorithm that could easily be ported to C:
class Tree:
    def __init__(self, branch_factor):
        self.branch_factor = branch_factor
        self.root = {"value": None, "parent": None, "children": []}
        self.current = self.root

    def next(self):
        if not self.current["children"]:
            self.current["children"] = [{"value": val, "parent": self.current, "children": []}
                                        for val in range(self.branch_factor)]
        self.current = self.current["children"][0]
        return self.current["value"]

    def reset(self):
        if self.current["parent"]:
            self.current["parent"]["children"].pop(0)
        else:
            return False
        if self.current["parent"]["children"]:
            self.current = self.root
            return True
        else:
            self.current = self.current["parent"]
            return self.reset()

    def __str__(self):
        return str(self.root)
It would be used like this:
>>> tree = Tree(4)
>>> print tree.next(), tree.next(), tree.next()
>>> while tree.reset():
...     print tree.next(), tree.next(), tree.next()
This would be equivalent to calling itertools.product([0,1,2,3], repeat=3). The special part is that in each iteration of the loop I can call tree.next() an arbitrary number of times, just as often as it is needed. Whenever I cannot generate an additional layer anymore, I can call tree.reset() to start a new permutation. For my code I used a Python-specific version, which is a generator:
def product_varlength(branch_factor):
    root = {"value": None, "parent": None, "children": []}
    current = root
    while True:
        if not current["children"]:
            current["children"] = [{"value": val, "parent": current, "children": []}
                                   for val in range(branch_factor)]
        current = current["children"][0]
        if (yield current["value"]):
            while True:
                if current["parent"]:
                    current["parent"]["children"].pop(0)
                else:
                    return
                if current["parent"]["children"]:
                    current = root
                    break
                else:
                    current = current["parent"]
It is used like this:
it = product_varlength(4)
print it.next(), it.send(False), it.send(False)
while True:
    print it.send(True), it.send(False), it.send(False)
Again, the expression in the loop can have any number of it.send(False) calls. The first it.send(True) tells the generator to do a reset.

22 November 2011

Raphaël Hertzog: People behind Debian: Stefano Zacchiroli, Debian Project Leader

picture by Tiago Bortoletto Vaz, CC BY-NC-SA 2.0


It's been one year since the first People behind Debian interview. For this special occasion, I wanted a special guest and I'm happy that our Debian Project Leader (DPL) Stefano Zacchiroli accepted my invitation. He has a difficult role in the community, but he's doing a really great job of it. He's a great mediator in difficult situations, but he's also opinionated and can push a discussion towards a conclusion. Read on to learn how he became a Debian developer and later DPL, what he's excited about in the next Debian release, and much more. Raphael: Who are you? Stefano: I'm Stefano Zacchiroli, but I prefer to be called Zack, both on the Internet and in real life. I'm 32, Italian, emigrated to France about 4 years ago. I live in Paris, and I find it to be one of the most gorgeous and exciting cities in the world. As my day job I'm a Computer Science researcher and teacher at University Paris Diderot and IRILL. In my copious free time I contribute to Debian, and I'm firmly convinced that doing so is an effective way to help the cause of Free Software. Besides, I find it to be a lot of fun! Raphael: How did you start contributing to Debian? Stefano: Flash back to 1999, when I was a 2nd year student in Computer Science at the University of Bologna. Back then in Italy it was uncommon for young geeks to get exposed to Free Software: the Internet was way less pervasive than today and most computer magazines didn't pay much attention to GNU/Linux. Luckily for me, the professor in charge of the student lab was a Free Software enthusiast and all the students' machines there were running Debian. Not only that, but there was also a student program that allowed volunteers to become sysadmins after having shown their skills and convinced the director they were trustworthy. Becoming one of those volunteer Debian admins quickly became one of my top objectives for the year, and that is where I learned to use Debian. The year after that, I got in touch with a research group that was to become the happy bunch of hackers with whom I would do both my master's and PhD theses. They were designing a new proof assistant. Most of the development was in OCaml and happened on Debian. OCaml was available in Debian, but many of the libraries we needed were not. So I approached the Debian OCaml Team offering to help. Before I realized what was going on I was (co-)maintainer of tens of OCaml-related packages. At some point I was told "I think you should apply as a Debian Developer". So I did, and in a couple of months I went through the New Member (NM) process, which was back then in its infancy. I still remember my happiness while reading the "account created" mail, the day after my 22nd birthday. I know the NM process went through some bad publicity in the past, but I'm happy to see that nowadays the process can be as swift as it was for me 10 years ago. Raphael: It's your second year as Debian Project Leader (DPL). Are you feeling more productive in the role? Do you fear burning out? Stefano: I'm feeling way more productive, no doubt. The task of the Debian Project Leader is not necessarily difficult, but it is a complex and scarcely documented one. It is also profoundly different from any other task that Debian people usually work on, so that experience doesn't help much in getting started. Before becoming effective as DPL one needs to get to know many people and mechanisms one is not familiar with.
More importantly, one needs to set up a personal work-flow that allows one to keep up with day-to-day DPL tasks (which are aplenty) as well as with urgencies (which tend to pop up in the leader@debian.org INBOX at the least convenient time). Finally, one also needs to do proper traffic shaping and always retain enough motivation bandwidth to keep the Project informed about what is going on in DPL-land. Finding the right balance among all these ingredients can take some time. Once one is past it, everything goes way more smoothly. The above is why I'm constantly encouraging people interested in running for DPL in the future to reach out to me and work on some tasks of the current DPL's TODO list. I swear it is not just a cheap attempt at slavery! It is rather an attempt at DPL mentoring that could be beneficial: both to give future candidates more awareness of the task, and to reduce the potential downtime when handing over from one DPL to the next. Regarding burn out, I don't feel prone to its risk these days. If I look back, I can say that my contributions as DPL have been pretty constant in volume over time; my enthusiasm for the task, if anything, is on the rise. The effectiveness of my contributions as DPL is, on the other hand, not mine to judge. Raphael: If you had to single out two achievements where you were involved as DPL, what would they be? Stefano: I'd go for the following two, in no particular order: OK, let me cheat and add a third one: I'm also proud of having been able to report to the Project my whereabouts as DPL, thoroughly and periodically, since the very beginning of my first term. People annoyed by my reporting logorrhea now have all my sympathies. Raphael: Among the possible new features of Debian Wheezy, which one gets you most excited? Stefano: It's multi-arch, no doubt. Even though it is not a directly user-visible change, it's a very far-reaching one. It is also one of those changes that make me feel that "moment of truth" of coders, when you realize you are finally doing the right thing and ditching piles of ugly hacks.
It's multi-arch [...] you realize you are finally doing the right thing and ditching piles of ugly hacks.
Raphael: If you were not DPL and could spend all your time on Debian, what would you work on? Stefano: I would sit down and do software development for Debian. It's impressive how many important and beneficial changes for Debian could be delivered by specific software improvements in various parts of our infrastructure. We tend to attract many packagers, but not so many people willing to maintain Debian infrastructure software like dak, britney, debbugs, the PTS, etc. Their maintenance burden then falls on the shoulders of the respective teams, which are generally very busy with other important tasks. As a project, we seem to be more appealing to packagers than to software developers. That is a pity given the amount of exciting coding tasks that are everywhere in Debian. Part of the reason we are not appealing to developers is that we are not particularly good at collecting coding tasks in a place where interested developers could easily pick them up. It also takes quite a bit of inside knowledge to spot infrastructure bugs and understand how to fix them. I long for some spare hacking time to check if I'm still a good enough coder to hunt down longstanding bugs in our infrastructure, which have ended up being my pet peeves. I'd also love to dive again into RCBW. It's less committing than package maintenance, more diverse and challenging, and also an immensely useful activity for getting Debian releases done. Raphael: Martin Michlmayr is worried that there are so few paid opportunities around Debian. Do you agree with his sentiment, and if yes, do you have ideas on how to improve this situation? Stefano: The idealistic me wishes Debian to be a community made only of volunteers who devote their free time to the Project. Oh, and that me also wishes Debian to be competitive with similar projects, no matter how many full-time employees the others have! That is coherent with a view of society where everyone has a day job, but also engages in volunteering activities ensuring that the public interest is pursued by people motivated by interests other than profit. But I do realize that for Free Software to succeed, companies, employees, and salaries should all have a role. I admire projects that strike a good balance between volunteer and paid work. The Linux kernel is emblematic in that respect: many developers are paid by companies that have a commercial or strategic interest in Linux. Nevertheless volunteer contributions are aplenty, and the Linux community gives a convincing impression that choices are driven by the community itself (or by its benevolent dictator) without money-driven impositions.
I do realize that for Free Software to succeed, companies, employees, and salaries should all have a role.
Such an ecosystem does not exist around Debian. We do have a partner program that allows for it to happen, but we have very few partners with an interest in doing distribution development work. Like Martin, I'm worried by this state of affairs, because it de facto means we lag behind in terms of available people power. In a community of volunteers, that might frustrate people, and that is not good. To improve over the status quo, the first step is to federate together small and medium companies that have a strategic interest in Debian and listen to their needs. I'm already in touch with representatives of such companies that, in many cases, already employ Debian Developers to do some distribution work in Debian. We will soon be sending out a call to reach out to more such companies, but since we are discussing this, why wait? If some of our readers here are representatives of such companies, I encourage them to get in touch with me about this. Raphael: You know that the fundraising campaign for the Debian Administrator's Handbook is on a good track, but the liberation of the book is not yet assured. What do you think of this project? Stefano: I'm happy about the project, to the point that I've accepted to write a testimonial for it :-). I'm sad about the scarce availability of up-to-date and high-quality (DFSG-)Free books about Debian, and I welcome any initiative that might help close that gap.
I'm sad about the scarce availability of up-to-date and high-quality (DFSG-)Free books about Debian.
Free Culture is a great offspring of Free Software, and I'm convinced we need to stand up against double standards in the two camps. Leaving aside software-specific licensing details, the basic freedoms to be defended are the same. They are those freedoms that ensure that a reader is in full control of his book, pretty much as they ensure that a computer user is in full control of the software that runs on it. I'm therefore proud that Debian has long resolved that the Debian Free Software Guidelines (DFSG) apply not only to software but also to books and other pieces of documentation. But the status quo implies not only that we have very few up-to-date, high-quality books about Debian. It also implies that, at present, we have no such book that we can distribute in the Debian archive, showing off the Free Software (and Free Culture!) values we stand for.
Crowdfunding is considered to be a good mate for Free Culture, where the services model that applies to Free Software is more difficult to exploit. So I wish yours and Roland's initiative the best of luck. A different matter is whether Debian, as a project, should endorse the initiative and actively campaign for it. As you know, I think it should not. While we do advertise general project donations, we don't do mission-specific fundraising campaigns for Debian itself. Coherently with that, I don't think we should relay crowdfunding campaigns for 3rd parties, even when the result would be beneficial to Debian. Raphael: Is there someone in Debian that you admire for their contributions? Stefano: There are two classes of people that I particularly admire in Debian:
Thank you to Zack for the time spent answering my questions. I hope you enjoyed reading his answers as I did.



30 September 2011

Axel Beckert: Fun facts from the UDD

After spotting an upload of mira, who in turn spotted an upload of abe (the package, not an upload by me aka abe@d.o), mira (mirabilos aka tg@d.o) noticed that there are Debian packages which have the same name as the login names of some Debian Developers. Of course I noticed a long time ago that there is a Debian package with my login name "abe". Another well-known Debian login and former package name is amaya. But since someone else came up with that thought too, it was time to find the definitive answer to the question: which DD login names also exist as Debian package names? My first try was based on the list of trusted GnuPG keys:
$ apt-cache policy $(gpg --keyring /etc/apt/trusted.gpg --list-keys 2>/dev/null | \
                     grep @debian.org | \
                     awk -F'[<@]' '{ print $2 }' | \
                     sort -u) 2>/dev/null | \
                   egrep -o '^[^ :]*'
alex
tor
ed
bam
ng
But this was not satisfying, as my own name didn't show up and gpg also threw quite a lot of block reading errors (which is also the reason for redirecting STDERR). mira then had the idea of using the Ultimate Debian Database to answer this question more properly:
udd=> SELECT login, name FROM carnivore_login, carnivore_names
      WHERE carnivore_login.id=carnivore_names.id AND login IN
      (SELECT package AS login FROM packages, active_dds
       WHERE packages.package=active_dds.login UNION
       SELECT source AS name FROM sources, active_dds
       WHERE sources.source=active_dds.login)
      ORDER BY login;
 login |                 name
-------+---------------------------------------
 abe   | Axel Beckert
 alex  | Alexander List
 alex  | Alexander M. List  4402020774 9332554
 and   | Andrea Veri
 ash   | Albert Huang
 bam   | Brian May
 ed    | Ed Boraas
 ed    | Ed G. Boraas [RSA Compatibility Key]
 ed    | Ed G. Boraas [RSA]
 eric  | Eric Dorland
 gq    | Alexander GQ Gerasiov
 iml   | Ian Maclaine-cross
 lunar | Jérémy Bobbio
 mako  | Benjamin Hill
 mako  | Benjamin Mako Hill
 mbr   | Markus Braun
 mlt   | Marcela Tiznado
 nas   | Neil A. Schemenauer
 nas   | Neil Schemenauer
 opal  | Ola Lundkvist
 opal  | Ola Lundqvist
 paco  | Francisco Moya
 paul  | Paul Slootman
 pino  | Pino Toscano
 pyro  | Brian Nelson
 stone | Fredrik Steen
(26 rows)
Interestingly, tor (Tor Slettnes) is missing from this list, so it's not complete either. At least I'm quite sure that nobody maintains a package with his own login name as package name. :-) We also have no packages ending in "-guest", so there's no chance that a package name matches an Alioth guest account either.

9 September 2011

Raphaël Hertzog: People behind Debian: Enrico Zini, member of the New Maintainer Frontdesk

Even though Enrico is not smiling in this picture, he's one of the friendliest Debian people that I know. I always enjoy his presentations because he can't refrain from inserting jokes or other funny tricks. :-) That said, he's serious too: there's lots of good stuff that he has developed over the years (starting with Debtags) and he has put a lot of effort into reforming the New Maintainer process. Read on to learn more about his various projects. Raphael: Who are you? Enrico: Hi, I'm Enrico Zini, a DD from Italy. I'm 35 and I work as a freelance Free Software developer. One of my historical roles in Debian is taking care of Debtags, but that is not all I do: my paid work led me to write and maintain some weather-forecast-related software in Debian, and recently I gained a Front Desk hat, and then a DAM hat. Raphael: How did you start contributing to Debian? Enrico: It was 2001, I was at uni, I was using Debian. At some point I wanted to learn packaging, so I read through the whole Policy from top to bottom. Then I thought: "why package only for myself?". There were many DDs at my uni, and it only seemed natural to me to join Debian as well. Evidently this was also natural for Zack [Note from editor: Stefano Zacchiroli], who had become a DD 6 months earlier and didn't hesitate to advocate me. I found the Policy and the Developer's Reference to be very interesting things to read, and I welcomed my AM's questions as an excuse to learn more. I completely understand those people who have fun trying to answer all the questions in the NM templates while they wait for an AM. With my super DAM powers I can see that my AM report was submitted on October 16, 2001 by my AM Martin Michlmayr, and that James Troup created my account 9 days later, on October 25. Raphael: You have a special interest in the New Maintainer (NM) Process since you are a Debian Account Manager (DAM) and a member of the NM Frontdesk. Thanks to your work the process is much less academic/bureaucratic than it used to be. Can you remind us of the main changes? Enrico: One of the first things I noticed when I became a Front Desk member is that there was a tendency to advocate people too early, thinking "by the time they'll get an AM, they'll know enough". Unfortunately, this didn't always work, and once the real NM process started it would turn into a very long and demotivating experience both for the applicant and for the AM. So we tried raising the bar on advocacies, and that seems to have helped a lot. If people join NM when they are ready, it means that NM is quick and painless both for them and for their AMs, who are therefore able to process more applicants. We also did a rather radical cleanup of the "NM templates", which are a repository of questions that Application Managers can ask their applicants. We realized that AMs were just sending the whole templates to their applicants, so we moved all non-essential questions to separate files, to drastically reduce the number of questions that are asked by default. Other improvements in the NM process came from other parts of Debian: nowadays there are lots of ways to learn and gradually gain experience and reputation inside the project before joining NM, which means that we get many candidates who we can process quickly. For example, packages can now be uploaded via sponsors, and the Mentors project helps new contributors to find sponsors and get their first packages reviewed.
One can then become a Debian Maintainer and take full responsibility for their own packages, gaining experience and reputation. The idea of working in teams also helped: big teams like the Perl, Python, KDE, OCaml and Haskell teams (and many more) are excellent entry points for people who have something to package. But Debian is not just for packagers, and one could join teams like the Website team, the Press and Publicity team, the Events and Merchandise team or their local translation team. Becoming a DD the non-uploading way is not just for non-technical people: one could enjoy programming but not packaging. An interesting way to get involved like that is to help write or maintain some of the many Debian services. Note that I'm not suggesting this as a way to learn how to program, but as a way to get involved in Debian by writing code. Finally, we started to appreciate the importance of having people's activities in Debian explicitly visible, which means that the more obviously good work one has done in Debian, the fewer questions we need to ask. Jan Dittberner's DDPortfolio is an excellent resource for AMs and the Front Desk, and I'm maintaining a service called minechangelogs that, for people who have done lots of work in Debian, is able to fully replace the Tasks&Skills parts of the NM process. Raphael: What are your plans for Debian Wheezy? Enrico: For Wheezy I hope to be able to streamline and simplify Debtags a bit more. At Debconf11 I had a conversation with the FTP-masters on how to make some tags more official, and I now have to work a bit more on that. I'd also like to considerably downsize the codebase behind the debtags package, now that its job is quite clear and I don't need to experiment with fancy features the way I did in the past. I have to say that I enjoy programming more than I enjoy packaging, so most of my plans in Debian are not tied to releases. For example, I'd like to finish and deploy the new NM website codebase soon: it would mean having a codebase that's much easier to maintain, and in which it's much easier to implement new features. I'd also like to design a way to allow maintainers to review the tag submissions to their own packages instead of having to wait for me or David Paleino to do a regular review of all the submissions. Finally, I'd like to promote the usage of apt-xapian-index in more cutting-edge packaging applications. And to design a way to maintain up-to-date popcon information in one's local index. And improve and promote those services that I maintain; I tend to often have ideas for new ones. Raphael: If you could spend all your time on Debian, what would you work on? Enrico: If I could spend all my time on Debian, I would do a lot of software development: I love doing software development, but most of my development energy is spent on my paid work. I guess I would start my "all your time in Debian" by taking better care of the things that I'm already doing, and by promoting them better so that I wouldn't end up being the only person maintaining them. After that, however, I reckon that I do have a tendency to notice new, interesting problems in need(?) of a solution, and I guess I would end up wildly experimenting with new ideas in Debian much like a victorian mad scientist. Which reminds me that I most definitely need minions! Where can I find minions? Raphael: You're the author of the Debian Community Guidelines. I have always felt that this document should be more broadly advertised. For example, by integrating it into the Developer's Reference.
What do you think? Enrico: The DCG was really a collection of tips to improve one's online communication, based on ideas and feedback that I collected by pestering many experienced and well-respected people for some time. Like every repository of common sense, I think it should be widely promoted but never ever turned into law. It wouldn't be a bad idea to mention it in the Developer's Reference, or to package it as a separate file in the developers-reference package. The reasons I haven't actively been pushing for that to happen are two: there isn't much in the DCG that is specific to Debian, and I don't have the resources to do a proper job maintaining it. It'd be great if somebody could take over its maintenance and make it become some proper, widespread, easy-to-quote online reference document, like one of those HOWTOs that all serious people have read at some point in their lives. Raphael: What's the biggest problem of Debian? Enrico: It's sometimes hard to get feedback or help if you work on something unusual. That is partly to be expected, and partly probably due to me not having yet learnt how to get people involved in what I do. Raphael: What motivates you to continue to contribute year after year? Enrico: Debian keeps evolving, so there is always something to learn. And Debian is real, so everything I do is constantly measured against reality. What more intellectual stimulation could one possibly want? Raphael: Is there someone in Debian that you admire for their contributions? Enrico: I don't think I could reasonably list everyone I admire in Debian: pretty much in every corner of the project there is someone, sometimes not very well known, who is putting a lot of Quality into what they do. Someone who decided that X should work well in Debian, or that Debian should work well for Y, or that Z is something Debian people can rely on, and makes sure that it is so. Those are the people who make sure Debian is and will be not just a hobby, but a base upon which I can rely for my personal and working life.
Thank you to Enrico for the time spent answering my questions. I hope you enjoyed reading his answers as I did.



9 April 2011

Cyril Brulebois: Debian XSF News #9

This is the ninth Debian XSF News issue. As can be seen below, I haven't yet decided how to present the various items. This time, I'll try to gather all packages updated since the previous issue, grouped by category, with a single-line summary each. Lengthy comments come after that list of updated packages.
  1. Here come the updated packages, with contributors/uploaders between square brackets (Timo = Timo Aaltonen, JVdG = Julien Viard de Galbert, Robert = Robert Hooker). Protocol:
    • [KiBi] x11proto-core: new upstream release, bringing Sinhala support; uploaded to experimental.
    Libraries:
    • [Timo,KiBi] libx11: new upstream release, fixing some hang issues; uploaded to unstable.
    • [KiBi] libxi: new upstream release; uploaded to unstable.
    • [KiBi] libxkbcommon: finally accepted by ftpmasters, needed for wayland; uploaded to experimental.
    Server:
    • [KiBi] xorg-server: stable release 1.9.5, unlikely to cause regressions from the previous release candidate; in other words: a good candidate for testing if the Linux kernel migrates some day; uploaded to unstable.
    • [KiBi] xorg-server: first release candidate for the first stable bugfix release in the 1.10 series, which finally builds; uploaded to experimental.
    Drivers:
    Others:
    • [KiBi] xorg: originally a few tweaks to make it possible to install X on Hurd; uploaded to unstable,
    • [KiBi] xorg: but wkhtmltopdf failed on several buildds, so I disabled PDF generation, and completed the switch to asciidoc mentioned in DXN#8; uploaded to unstable.
    • [KiBi] xorg-sgml-doctools: new upstream release, adds support for docbook external references; uploaded to unstable.
    • [Robert,KiBi] xutils-dev: new util-macros release, and a version lookup file; uploaded to unstable.
  2. Why am I carefully uploading video driver packages to experimental only? Because bad regressions happen on a regular basis, so it seems quite nice to keep well-tested versions in unstable for now. Once the X stack has migrated to testing (which, as explained in DXN#7 and DXN#8, is waiting for the Linux kernel to migrate), new versions in unstable are welcome, so that one can easily tell whether bugs in those versions are regressions from the versions available in testing. In the meanwhile, one can build packages from the git repositories, using the debian-unstable branch (which is the default).
  3. Why am I so rash then, uploading input drivers to unstable as they are released? First of all, our waiting for the kernel means we have no issues on the 10-day delay front. Second of all, input bugs are usually fixed very quickly upstream (you can go there and thank Peter Hutterer in particular). So staying very close to whatever upstream ships makes some sense.
  4. Why am I uploading drivers twice? Since it isn't specific to the current situation, but a general question when it comes to supporting two versions of the server in parallel, I decided to document that in a "handling multiple server versions thanks to experimental" page. The answer to this specific question is available in the note at the bottom of that page.
See you in a few days for a follow-up Debian XSF News issue.

3 April 2011

Cyril Brulebois: Debian XSF News #8

This is the eighth Debian XSF News issue. For a change, I'm going to use a numbered list, which should help telling people which item to look for when pointing to a given URL. Feel free to let me know whether that seems like a nice idea or whether it hurts readability. Also, this issue was prepared several days ago already, so I'm publishing it (with the bits of polishing it still needed) without mentioning what happened in the last few days (see you in the next DXN issue!).
  1. Let's start with a few common bugs reported over the past few weeks:
    • The server can crash due to some X Font Server (XFS) issue, as reported upstream in FDO#31501 or in Debian as #616578. The easy fix is to get rid of FontPath in xorg.conf, or to remove the xfs package. It's deprecated anyway.
    • Xdm used to crash when started from init, but not afterwards (#617208). Not exactly fun to reproduce, but with the help of a VM, bisecting libxt to find the guilty commit was quite easy. After a quick upload with this commit reverted, a real fix was pushed upstream; a new upstream version was released, packaged, and uploaded right after that.
    • We've had several reports of flickering screens, which are actually due to upowerd's polling every 30 seconds: #613745.
    • Many bug reports were filed due to a regression on the kernel side for the 6.0.1 squeeze point release, leading to cursor issues with Intel graphics: #618665.
  2. Receiving several similar reports reminded me of the CurrentProblemsInUnstable page on the wiki, which has long been unmaintained (and that's why I'm not linking to it). I'm not exactly sure what to do at this point, but I think having a similar page on http://pkg-xorg.alioth.debian.org/, linked from the "how to report bugs" page, would make sense. Common issues, as well as their solutions or workarounds for stable, should probably go to the FAQ instead.
  3. As explained in DXN#7, we're waiting for the kernel to migrate to wheezy. The 2.6.38 upstream release was quickly pushed to unstable, which is great news, even if it's not really ready yet (since it's still failing to build on armel and mips).
  4. I've been using markdown for our documentation, basically since it looked sufficient for our needs and since I've been using it to blog for years now, but it had some limitations. I've been hearing a lot of nice things about asciidoc for a while (hi, Corsac!), so I gave it a quick shot. Being quite happy with it, I converted our documentation to asciidoc, which at the bare minimum buys us a nice CSS (at least nicer than the one I wrote…), along with an automatic table of contents if we ask for it, which should help navigating to the appropriate place. A few drawbacks:
    • The syntax (or the parser's behaviour) changed a lot since lenny's version, so updating the online documentation broke badly. Thanks to the nice Alioth admins, the version from lenny-backports was quickly installed and the website should look fine.
    • The automatic table of contents is generated through JavaScript, which doesn't play nicely with wkhtmltopdf (a WebKit-based HTML to PDF converter), since the table of contents gets pixelated in the generated PDF documents. We could use a2x to generate documents the DocBook way, but that means dealing with XSL stylesheets as far as I can tell; that looks time-consuming and rather low-priority. But of course, contributions are welcome.
  5. When I fixed missing XSecurity (#599657) for squeeze, I didn't notice the 1.9 packages were forked right before that, so they were affected too. I have fixed it in sid since then (and in git for experimental). I noticed that when Ian reported a crash with large timeouts in xauth calls, which I couldn't reproduce since untrusted cookies without XSecurity don't trigger this issue. I reported that upstream as FDO#35066, which got marked as a duplicate of the (currently restricted) FDO#27134. My patch is currently still waiting for a review.
  6. Let's mention upcoming updates, prepared in git but not uploaded yet:
    • mesa 7.10.1, prepared by Chris (RAOF); will probably be uploaded to experimental, unless 7.10 migrates to testing first, in which case that update will target unstable.
    • Intel driver: Lintian's been complaining about the .so symlinks for a while, and I finally gave it a quick look. It seems one is supposed to put e.g. libI810XvMC.so.1 in /etc/X11/XvMCConfig to use that library, so the symlinks are indeed not needed at all, and I removed them.
    • Tias Guns and Timo Aaltonen introduced xinput-calibrator in a git repository; that's a generic touchscreen calibration tool.
  7. Here come the updated packages, with uploader between square brackets (JVdG = Julien Viard de Galbert, Sean = Sean Finney). For the next issue, I'll try to link to the relevant entries in the Package Tracking System.
    • [KiBi] libxt: to unstable, as mentioned above, with a hot fix, then with a real fix.
    • [KiBi] synaptics input driver: to unstable and experimental, fixing the FTBFS on GNU/kFreeBSD.
    • [KiBi] xterm: new upstream, to unstable.
    • [KiBi] libdrm: new upstream, to experimental. A few patches to hide private symbols were sent upstream, but I've seen no reactions yet (and that apparently happened in the past already).
    • [KiBi] xorg-server 1.9.5rc1 then 1.9.5, to unstable.
    • [KiBi] xutils-dev to unstable: the bootstrap issue goes away, thanks to Steve's report.
    • [KiBi] libxp to unstable, nothing fancy, that's libxp…
    • [KiBi] keyboard input driver: mostly documentation update, to unstable and experimental.
    • [KiBi] mouse input driver: fixes BSD issues, to unstable and experimental.
    • [KiBi] intel video driver: to experimental, but the debian-unstable branch can be used to build the driver against unstable's server.
    • [KiBi] xfixes: protocol to unstable, and library to experimental (just in case); this brings support for pointer barriers.
    • [JVdG] openchrome video driver: Julien introduced a debugging package, and got rid of the (old!) via transitional package. He also performed his first upload as a Debian Maintainer. Yay!
    • [KiBi] siliconmotion video driver: to unstable.
    • [KiBi] pixman: new upstream release candidate, to experimental.
    • [Sean] last but not least: many compiz packages to experimental.

26 February 2011

Sylvain Le Gall: OCaml Debian News

... or don't shoot yourself in the foot. This is not a big secret: Debian Squeeze has been released. Right after this event, the OCaml Debian Task Force was back in action, with Stéphane in the leading role. He has planned the transition to OCaml 3.12.0. We will proceed in two steps: a small transition of a reduced set of packages that can be transitioned before 3.12, and then the big transition. The reason for the small transition is to avoid having to dep-wait (wait for dependencies) on packages uploaded by humans. In a (not so far) past, the OCaml Debian Task Force members uploaded packages by hand and waited for a full rebuild before going to the next step. This was long and cumbersome. We now use binNMUs: binary-only uploads, with no source changes, processed automatically by the release team and its infrastructure. This is far more effective and helps us reduce the duration of the transition...

The small transition is happening now!!! Don't update/upgrade your critical Debian installations with OCaml packages; you'll get a lot of removals if you do so. N.B. these removals are part of the scheme described in the famous "Enforcing type-safe linking using package dependencies" paper.

As a side note, I am happy to announce that a full round of new OCaml packages has landed in Debian unstable. People aware of my current work should notice that all the dependencies of OASIS are now in Debian unstable: ocaml-data-notation, ocamlify, ocaml-expect. This is a hint about the next OCaml Debian package I will upload. You can also have a look at OASIS-enabled packages (all the OASIS dependencies, ocaml-sqlexpr and ocaml-extunix). These packages have been generated using oasis2debian, a tool to convert _oasis into debian/ packaging files. After this transition, we will continue with standard upgrade work (e.g. camomile to 0.8.1).

Sylvain Le Gall is an OCaml consultant working for OCamlCore SARL

23 December 2010

Raphaël Hertzog: People behind Debian: Mehdi Dogguy, release assistant

Mehdi Dogguy

Picture of Mehdi taken by Antoine Madet

Mehdi has been a Debian developer for a bit more than a year, and he's already part of the Debian Release Team. His story is quite typical in that he started there by trying to help while observing the team do its work. That's a recurrent pattern for people who get co-opted into free software teams. Read on for more info about the release team, and Mehdi's opinion on many topics. My questions are in bold, the rest is by Mehdi (except for the additional information that I inserted in italics).

Who are you?

I'm 27 years old. I grew up in Ariana in northern Tunisia, but have been living in Paris, France, since 2002. I'm a PhD student at the PPS laboratory, where I study synchronous concurrent process calculi. I became interested in Debian when I saw one of my colleagues, Samuel Mimram (first sponsor and advocate), trying to resolve #440469, a bug reported against a program I wrote. We have never been able to resolve it, but my intent to contribute was born there. Since then, I started to maintain some packages and help where I can.

What's your biggest achievement within Debian?

I don't think I have had time to accomplish a lot yet :) I've been mostly active in the OCaml team, where we designed a tool to automatically compute the dependencies between OCaml packages, called dh-ocaml. This was joint work with Stéphane Glondu, Sylvain Le Gall and Stefano Zacchiroli. I really appreciated the time spent with them while developing dh-ocaml. Some of the bits included in dh-ocaml have been included upstream in their latest release. I've also tried to give a second life to the Buildd Status Pages, because they were (kind of) abandoned. I intend to keep them alive and add new features to them.

If you had a wand and could change one thing in Debian, what would that be?

Make OCaml part of a default Debian installation :D But, since I'm not a magician yet, I'd stick to more realistic plans:
  1. A lot of desktop users fear Debian. I think that the desktop installation offered by Debian today is very user-friendly, and we should be able to attract more and more desktop users. Still, there is some work to be done in various places to make it even more attractive. The idea is to enhance the usability and integration of the various tools together. Each fix could be easy or trivial, but the final result would be an improved desktop experience for our users. Our packaged software runs well, so anyone can participate, since the most difficult part is finding the broken scenarios. Fixes could be worked out together with maintainers, upstream, or other interested people.

    I'll try to come up with a plan, a list of things that need polishing or fixes, and gather a group of people to work on it. I'd definitely be interested in participating in such a project, and I hope that I'll find other people to help. If the plan is clear enough and has well-described objectives and criteria, it could be proposed to the Release Team to consider it as a Release Goal for Wheezy.

  2. NMUs are a great way to make things move forward. But sometimes an NMU can break things or have some undesirable effects. For now, NMUers have to manually track the package's status for some time to be sure that everything is alright. It could be a good idea to be auto-subscribed to the bug notifications of NMUed packages for some period of time (let's say a month) to be aware of any new issues and try to fix them. NMUing a package is not just applying a patch and hitting enter after dput. It's also about making sure that the changes are correct, that no regressions have been introduced, etc.

  3. Orphaned packages: it could be considered too strict and undesirable, but what about not keeping orphaned and buggy packages in Testing? What about removing them from the archive if they are buggy and still unmaintained after some period? Our ftp archive is growing; it could make sense to do some (more strict) housekeeping. I believe this question can be raised during the next QA meeting. We should think about what we want to do with those packages before they rot in the archive.
[Raphael Hertzog: I would like to point out that pts-subscribe, provided by devscripts, makes it easy to temporarily subscribe to bug notifications after a Non-Maintainer Upload (NMU).]

You've been a Debian developer since August 2009, and you're already an assistant within the Release Management team. How did that happen, and what is this about?

In the OCaml team, we have to start a transition each time we upload a new version of the OCaml compiler (actually, for each package). So some coordination with the Release Team is needed to make the transition happen. When we are ready to upload a new version of the compiler, we ask the Release Team for permission and wait for their ack. Sometimes their reply is fast (e.g. if there is no conflicting transition running), but that's not always the case. While waiting for an ack, I used to check what was happening on debian-release@l.d.o. It made me more and more interested in the activities of the Release Team. Then (before getting my Debian account), I had the chance to participate in DebConf9, where I met Luk and Phil. It was a good occasion to see more of the tools used by the Release Team. During April 2010, I had some spare time and was able to implement a little tool called Jamie to inspect the relations between transitions. It helps us quickly see which transitions can run in parallel, or what should wait. And one day (in May 2010, IIRC), Adam invited me to join the team. As members of the Release Team, we have multiple areas to work on:
  1. Taking care of transitions during the development cycle, which means making sure that a given set of packages is correctly (re-)built or fixed against a specific (to each transition) set of packages, and finding a way to tell Britney that those packages can migrate (and it would be great if she also shared the same opinion). [Raphael Hertzog: britney is the name of the software that controls the content of the Testing distribution.]
  2. Paying attention to what is happening in the archive (uploads, reported RC bugs, etc.). The idea is to try to detect unexpected transitions and blocked packages, to make sure that RC bug fixes reach Testing in a reasonable period of time, etc.
  3. During a freeze, making sure that unblock requests and freeze exceptions are not forgotten, and trying to make the RC bug count decrease.
There are other tasks that I'll let you discover by joining the game.

Deciding what goes (or not) into the next stable release is a big responsibility and can be incredibly difficult at times. You have to make judgement calls all the time. What are your own criteria?

That's a very hard question to answer (at least for me). It really depends on the case. I try to follow the criteria that we publish in each release update. Sometimes an unblock request doesn't match those criteria, and we have to decide what to accept from the set of proposed changes. Generally, new features and non-fixes (read: new upstream versions) are not the kind of changes we would accept during the freeze. Some of them could be accepted if they are not intrusive, easy, and well defended. When I'm not sure, I try to ask other members of the Release Team to see if they share my opinion or if I missed something important during the review. The key point is to have a clear idea of the benefit of the proposed update, and to compare it to the current situation. For example, accepting a new upstream release (even if it fixes some critical bugs) risks breaking other features, and that's why we (usually) ask for a backported fix. It's also worth noticing that (most of the time) we don't decide what goes in, but (more specifically) what version of a given package goes in, and we try to give contributors an idea of what kind of changes are acceptable during the freeze. There are some exceptions though; most of them are to fix a critical package or feature.

Do you have plans to improve the release process for Debian Wheezy?

We do have plans to improve every bit in Debian. Wheezy will be the best release ever. We just don't know the details yet :) During our last meeting in Paris last October, the Release Team agreed to organize a meeting after Squeeze's release to discuss (among other questions) Wheezy's cycle. But the details of the meeting are not fixed yet (we still have plenty of time to organize it, and other more important tasks to care about). We would like to be able to announce a clear roadmap for Wheezy and enhance our communication with the rest of the project. We certainly want to avoid what happened with Squeeze. Making things a bit more predictable for developers is one of our goals.

Do you think the Constantly Usable Testing project will help?

The original idea by Joey Hess is great because it allows d-i developers to work with a stable version of the archive. It allows them to focus on the new features they want to implement or the parts they want to fix (AIUI). It also allows them to have constantly available and working installation images. Then there is the idea of having a constantly usable Testing for users. The idea seems nice. People tend to like the idea behind CUT because they miss software that disappears from Testing, and because of the long delays for security fixes to reach Testing. If the Release Team has decided to remove a package from Testing, there must be a reason for that. It either means that the software is broken, has unfixed security holes, or its maintainer asked for the removal. I think that we should rather spend some time fixing those packages, instead of throwing a broken version into a new suite. It could be argued that one could add experimental's version in CUT (or sid's), but every user is free to cherry-pick packages from the relevant suite when needed while still following Testing as a default branch.
Besides, it's quite easy to see what was removed recently by checking the archive of debian-testing-changes or by querying UDD. IMO, it would be more useful to provide a better interface to that archive for our users. We could even imagine a program that alerts the user about installed software that was recently removed from Testing, to keep the user constantly aware of any issue that could affect his machine (a rough sketch of this idea follows below). About security or important updates, one has to recall the existence of testing-security and testing-proposed-updates, which are used specifically to let fixes reach Testing as soon as possible when it's not possible to go through Unstable. I'm sure the security team would appreciate some help dealing with security updates for Testing. We also have ways to speed up the migration of packages from Unstable to Testing. I have to admit that I'm not convinced yet by the benefits CUT brings to our users.
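
The alert program Mehdi imagines could be quite small. Here is a rough, hypothetical sketch; the removals.txt input is a placeholder, since the real data would have to be extracted from the debian-testing-changes archive or from UDD:

    #!/usr/bin/env python
    # Hypothetical sketch: warn about installed packages recently removed
    # from Testing. Assumes "removals.txt" holds one package name per
    # line; producing that file from debian-testing-changes or UDD is
    # left out here.
    import subprocess

    def installed_packages():
        # dpkg-query lists every package known to dpkg; keep only the
        # ones actually installed.
        out = subprocess.check_output(
            ["dpkg-query", "-W", "-f", "${Package} ${Status}\n"])
        for line in out.decode().splitlines():
            name, status = line.split(" ", 1)
            if status.strip() == "install ok installed":
                yield name

    def main():
        with open("removals.txt") as f:
            removed = {line.strip() for line in f if line.strip()}
        for pkg in sorted(removed.intersection(installed_packages())):
            print("warning: %s was recently removed from Testing" % pkg)

    if __name__ == "__main__":
        main()
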
Thank you to Mehdi for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Twitter and Facebook.


4 July 2010

Torsten Landschoff: Postprocessing conference videos

I was planning to attend DebConf in New York this year, but for a number of reasons I decided not to go. Fortunately, Ferdinand Thommes organized a MiniDebConf in Berlin at LinuxTag, and I managed to attend. Thanks, Ferdinand! There were a number of interesting talks. I especially liked the talk by our DPL, and those about piuparts and git-buildpackage. In contrast to the other LinuxTag talks, we had a live stream of our talks and recorded (most of) them. The kudos for setting this up go to Alexander Wirt, who spent quite a few hours to get it up and running. I have to apologize for being late in bringing my notebook, which was intended to do the Theora encoding of the live stream. This was a misunderstanding on my part; I should have known that this was not going to be set up in the night before show time… So to compensate for the extra hours he had to put in for me, I offered to do the post processing of the videos.

Basic approach for post processing

The main goal of post processing the videos was (of course) to compress them to a usable size from the original 143 GB. I also wanted to have a title on each video, and to show the sponsors at the end of the video. My basic idea to implement that consisted of the following steps:
  1. Create a title animation template.
  2. Generate title animations from template for all talks.
  3. Use a video editor to create a playlist of the parts: title, talk, epilogue.
  4. Run the video editor in batch mode to generate the combined video.
  5. Encode the resulting video as Ogg Theora.
As always with technology, it turned out that the original plan needed a few modifications.

Title animations

[Video: title animation example, http://www.landschoff.net/blog/uploads/2010/07/mdc2010_title_anim1.ogv]
Originally I wanted to use Blender for the title animation, but I knew it is quite a complicated piece of software. So I looked for something simpler, and stumbled across an article that pointed me towards Synfig Studio for 2D animation. It is also in Debian, so I gave it a try. I was delighted to find that Synfig Studio has a command-line renderer, which is just called synfig, and that the file format is XML, which would make it simple to batch-create the title animations. My title template can be found in this git repository.

Batch creation of title animations

I used a combination of make and a simple python script to replace the author name and the title of the talk in the synfig XML file. The data for all talks is another XML file, talks.xml. Basically, I used a simple XPath expression to find the relevant text node and change the data using the ElementTree API of the lxml python module (a rough sketch of this step is shown after this post). The same could be done using XSLT of course (for a constant replacement, see this file), but I found it easier to combine two XML files in python. Note that I create PNG files with synfig and use ffmpeg to generate a DV file from those. Originally, I had synfig create DV files directly, but those turned out quite gray for some reason. I am now unable to reproduce this problem.

Combining the title animation with the talk

For joining the title animation with the talk, I originally went with OpenShot, which somebody from the video team had running at the conference. My idea was to mix a single video manually and just replace the underlying data files for each talk. I expected this would be easy using the openshot-render command, which renders the output video from the input clips and the OpenShot project file. However, OpenShot stores the video lengths in the project file and takes those literally, so this did not work for talks of different play times. I considered working with Kino or Kdenlive, but they did not look more appropriate for this use case. I noticed that OpenShot and Kdenlive both use the Media Lovin' Toolkit (MLT) under the hood, and OpenShot actually serializes the MLT configuration to $HOME/.openshot/sequence.xml when rendering. I first tried to read that XML file from python (using the mlt python bindings from the python-mlt2 package) but did not find an API function to do that. So I just hard-coded the video sequence in python (see the second sketch after this post). I ran into a few gotchas on the way:

Things to improve

While the results look quite okay for me now, there is a lot of room for improvement.

Availability
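
For the batch step described above, the XPath-plus-ElementTree replacement could look roughly like the following sketch. The element structure, layer names, and file names are illustrative guesses; the real template and talk data live in the git repositories mentioned in the post.

    #!/usr/bin/env python
    # Rough sketch of the batch title generation: patch speaker and talk
    # title into a Synfig XML template using lxml. The XPath expressions
    # assume text layers described as "author" and "title", which is a
    # guess at the template's structure.
    from lxml import etree

    def make_titles(template, talks_file, out_pattern):
        for talk in etree.parse(talks_file).findall("talk"):
            tree = etree.parse(template)
            tree.xpath('//layer[@desc="author"]//string')[0].text = \
                talk.findtext("speaker")
            tree.xpath('//layer[@desc="title"]//string')[0].text = \
                talk.findtext("title")
            # Write one Synfig file per talk, ready for rendering.
            tree.write(out_pattern % talk.get("id"),
                       xml_declaration=True, encoding="utf-8")

    make_titles("title_template.sif", "talks.xml", "title_%s.sif")

Each generated file can then be rendered to PNG frames with the synfig command-line renderer and assembled into a DV stream with ffmpeg, as described above.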
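
Hard-coding the video sequence with the MLT Python bindings might then look like this minimal sketch; the class and method names are from memory of the python-mlt bindings, so treat the details as assumptions:

    import time
    import mlt

    # Initialize the MLT framework.
    mlt.Factory.init()
    profile = mlt.Profile("dv_pal")

    # Build the sequence: title animation, talk recording, epilogue.
    # File names are placeholders.
    playlist = mlt.Playlist()
    for clip in ("title.dv", "talk.dv", "sponsors.dv"):
        playlist.append(mlt.Producer(profile, clip))

    # Render the playlist to a single file via the avformat consumer.
    consumer = mlt.Consumer(profile, "avformat", "combined.dv")
    consumer.connect(playlist)
    consumer.start()
    while not consumer.is_stopped():
        time.sleep(1)

The combined file would then go through the Ogg Theora encoding step from the original plan.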

13 March 2010

Lucas Nussbaum: RC bugs of the week

I just couldn't resist… I joined the game, but did it the other way around. I could only file 51 new FTBFS (Fail To Build From Source) bugs this time. Looks like Squeeze is getting closer! I've also been doing rebuilds of Ubuntu lucid. There are currently 561 packages that fail to build from source in lucid/amd64, versus 430 in sid/amd64 (I will start rebuilding squeeze instead of sid after the freeze). Surprisingly, only 131 packages fail in both. I would have expected that number to be much higher. The 51 new FTBFS bugs:
#573648: gnome-chemistry-utils: FTBFS: Nonexistent build-dependency: libgoffice-0-8-dev
#573649: /api/package-list is no longer compressed
#573650: /api/package-list is no longer compressed
#573651: virt-top: FTBFS: configure: error: Cannot find required OCaml package extlib
#573652: heartbeat: FTBFS: Nonexistent build-dependency: libcluster-glue-dev
#573653: abiword: FTBFS: Nonexistent build-dependency: libgoffice-0-8-dev
#573654: helium: FTBFS: Makefile: hGetLine: invalid argument (Invalid or incomplete multibyte or wide character)
#573655: mlton-cross: FTBFS: /bin/sh: wget: not found
#573656: pytest-xdist: FTBFS: ImportError: No module named setuptools
#573657: libfile-fu-perl: FTBFS: tests failed
#573658: libphysfs: FTBFS: docs/man/man3/PHYSFS_addToSearchPath.3: No such file or directory at /usr/bin/dh_installman line 127.
#573659: ecl: FTBFS: rm: cannot remove '/build/user-ecl_10.2.1-1-amd64-S2bazb/ecl-10.2.1/debian/ecl/usr/share/info/dir': No such file or directory
#573660: /api/package-list is no longer compressed
#573661: libdbix-class-schema-loader-perl: FTBFS: tests failed
#573662: /api/package-list is no longer compressed
#573663: libthai: FTBFS: /usr/bin/install: cannot stat './../doc/man/man3/th_render_text_tis.3': No such file or directory
#573664: /api/package-list is no longer compressed
#573665: hunspell-dict-ko: FTBFS: build hangs
#573666: plexus-active-collections: FTBFS: missing junit:junit:jar:debian
#573667: nuapplet: FTBFS: Can't find gnutls library developpement files!
#573668: binutils-z80: FTBFS: /bin/sh: cannot open /build/user-binutils-z80_2.20-3-amd64-MwJBIl/binutils-z80-2.20/binutils-2.20.tar.bz2: No such file
#573669: keynav: FTBFS: keynav.c:799: error: too few arguments to function xdo_mousemove
#573670: moblin-panel-applications: FTBFS: moblin-netbook-launcher.c:1640: undefined reference to mx_scroll_view_get_vscroll_bar
#573671: tetradraw: FTBFS: /bin/bash: line 1: automake-1.7: command not found
#573672: beid: FTBFS: rm: cannot remove '_src/eidmw/bin/eidmw_*.qm': No such file or directory
#573673: swfdec-gnome: FTBFS: Nonexistent build-dependency: libswfdec-0.8-dev
#573674: swfdec-mozilla: FTBFS: Nonexistent build-dependency: libswfdec-0.8-dev
#573675: jasmin-sable: FTBFS: Error: JAVA_HOME is not defined correctly.
#573676: corosync: FTBFS: Depends field, reference to 'libcorosync4': error in version: version string is empty
#573677: banshee-extension-mirage: FTBFS: ./PlaylistGeneratorSource.cs(469,39): error CS0539: Banshee.PlaybackController.IBasicPlaybackController.Next in explicit interface declaration is not a member of interface
#573678: gnucash: FTBFS: Nonexistent build-dependency: libgoffice-0-8-dev
#573679: libwx-perl: FTBFS: xvfb-run: error: Xvfb failed to start
#573680: /api/package-list is no longer compressed
#573681: fso-usaged: FTBFS: fsobasics-2.0.vapi:110.2-110.84: error: FsoFramework already contains a definition for AsyncWorkerQueue
#573682: libiscwt-java: FTBFS: Nonexistent build-dependency: libswt-gtk-3.4-java
#573683: nordugrid-arc-nox: FTBFS: ld: cannot find -larccrypto
#573684: cssc: FTBFS: rm: cannot remove '/build/user-cssc_1.2.0-1-amd64-XCK7aQ/cssc-1.2.0/debian/cssc/usr/share/info/dir*': No such file or directory
#573685: django-threaded-multihost: FTBFS: distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('setuptools-hg')
#573686: /api/package-list is no longer compressed
#573687: davical: FTBFS: /bin/sh: phpdoc: not found
#573688: gauche-gtk: FTBFS: gauche-gtk.c:450: error: too few arguments to function Scm_Apply
#573689: quilt: FTBFS: tests failed
#573690: pyabiword: FTBFS: Nonexistent build-dependency: libgoffice-0-8-dev
#573691: flumotion: FTBFS: configure: error: You need at least version 2.0.1 of Twisted
#573692: libnet-dns-zone-parser-perl: FTBFS: tests failed
#573693: nip2: FTBFS: Nonexistent build-dependency: libgoffice-0-8-dev
#573694: hedgewars: FTBFS: Error: Illegal parameter: -Nu
#573695: epsilon: FTBFS: FAILED (skips=5, expectedFailures=1, errors=7, successes=229)
#573696: python-glpk: FTBFS: Unsatisfiable build-dependency: libglpk-dev(inst 4.43-1 ! <= wanted 4.38.999)
#573697: libnanoxml2-java: FTBFS: cp: cannot stat '/usr/share/doc/default-jdk-doc/api/package-list.gz': No such file or directory
#573698: doxia-maven-plugin: FTBFS: Reason: Cannot find parent: org.apache.maven.doxia:doxia

28 November 2009

Stefano Zacchiroli: Enforcing type-safe linking using package dependencies

Eclectic paper: dh-ocaml. In my day job as a researcher, I mostly publish papers along the lines of my main research interests (theorem proving, web technologies, formal methods applied to software engineering, ...). Sometimes, though, I just come up with some eclectic idea, not strictly related to my job, that I feel like cooking up as a paper to be reviewed by some scientific venue. It happened some weeks ago with dh-ocaml, the package implementing the new dependency scheme for OCaml-related packages in Debian. It took us (as in: the Debian OCaml maintainers) several years to get it right and satisfactory for maintainers, users, the release team, etc. The problem which dh-ocaml addresses is that, differently from C and other system-level languages, OCaml breaks ABI compatibility very often, due to the need of ensuring type safety across different libraries at link time. Other similarly strongly typed languages, such as Haskell, behave the same way. This is at odds with the implicit assumption of forward compatibility (unless otherwise "stated", e.g. with soname changes) that is relied upon by versioned dependencies in distributions like Debian. This discussion, the analysis of possible solutions, and the description of the solution we have actually implemented in dh-ocaml (called ABI approximation) turned out to be interesting for the French functional programming academic community: the paper on dh-ocaml has been accepted at the forthcoming JFLA 2010. It is no rocket science :-) , but people maintaining programs and libraries written in languages with concerns similar to OCaml's (e.g. Haskell, hello nomeata) might want to have a look.
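
The principle behind ABI approximation can be pictured with a small sketch: derive a stable hash from the interface checksums that a library's compiled interfaces expose, and embed that hash in the binary package relationships, so that a dependency breaks exactly when the ABI does. This is only an illustration, not dh-ocaml's actual code; in particular, the way the ocamlobjinfo output is parsed below is a simplifying assumption.

    #!/usr/bin/env python
    # Illustrative sketch (not dh-ocaml's real implementation): compute
    # an "ABI approximation" by hashing the md5 interface checksums that
    # ocamlobjinfo prints for a library's .cmi files. A dependency such
    # as "libfoo-ocaml-dev-<hash>" then changes exactly when the ABI does.
    import hashlib
    import re
    import subprocess

    def abi_hash(cmi_files):
        checksums = []
        for cmi in cmi_files:
            out = subprocess.check_output(["ocamlobjinfo", cmi]).decode()
            # Grab every md5 sum in the output; matching checksums
            # generically avoids depending on the exact line format.
            checksums.extend(re.findall(r"\b[0-9a-f]{32}\b", out))
        digest = hashlib.md5("\n".join(sorted(checksums)).encode())
        return digest.hexdigest()[:10]  # short suffix, arbitrary length

    if __name__ == "__main__":
        print(abi_hash(["foo.cmi", "bar.cmi"]))

With such a hash in a virtual package name, rebuilding a dependent package against an incompatible library makes the old dependency unsatisfiable instead of silently wrong, which is exactly the point.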

6 April 2009

Stefano Zacchiroli: ocaml 3.11 in testing

OCaml 3.11 has migrated to testing. Quoting from Dato:
* Stéphane Glondu [Sat, 04 Apr 2009 14:01:35 +0200]:
> Adeodato Simó wrote:
> >> Please schedule the attached requests for the OCaml 3.11.0 transition.
> > Scheduled, with the glitches noted below. Please get back to us with the
> > needed wanna-build actions.
> All packages that needed recompilation or sourceful uploads for the
> OCaml 3.11.0 transition are now compiled and available in unstable.
> I guess migrating ocaml to testing can now be considered...
This is now done:
ocaml    3.11.0-5   testing
ocaml    3.11.0-5   unstable
Congratulations for making of this transition one of the less painful
I've ever had to deal with, though I guess being a quite self-contained
set of packages and not having ties to other ongoing transitions really
helped. ;-)
Thanks!,

IOW, OCaml 3.11 has just migrated to Debian testing. YAY \o/ Congrats and thanks to all the people who contributed. Special kudos go to the (not so) newbies of the Debian OCaml Task Force, in particular Stéphane Glondu and Mehdi Dogguy: they have contributed work to a lot of packages and have also developed new tools which helped monitor the transition effectively. Keep up the good work.

28 March 2009

Biella Coleman: Circuits-FlowCharts-Biblegrams

After telling my friend about a talk on flow charts, brains, and psychology I attended today, my friend pointed me to his amazing art-a-gram of relationships as well as this even more out of this world biblegram. Damn.

19 January 2009

René Mayorga: yay!, I'm a Debian Developer \o/

following the traditional post.
I got an email today telling me that I'm a full Debian Developer now. I started my NM process on 2007-12-10; it took a bit more than a year, and now I'm the first DD from El Salvador. I have to thank all the people that helped me out: anibal, gregoa, dmn, mlt (Marcela), xerakko, twerner, benh, and some more people that I don't remember.

21 December 2008

Joerg Jaspert: Planet I18N

And there it is: we can now have Planet Debian in languages other than English. The first one is Spanish, but the framework allows for any number. In case you want to have Planet Debian in a different language, follow these steps (2LETTER means the 2-character ISO code of your language, like es for the Spanish one, de for a German one, etc.): you need to have 10 feeds ready and filled in in the file you send us. All the feeds should provide a category that has entries only in the language of this new instance. (Otherwise it won't make sense to create a Planet instance for that language…) Planet now features a link section called Planet I18N, right above the Subscriptions, which will always list the currently active other-language Planets. The various planets share the common files like the CSS, graphics and all the hackergotchis.
It was a bit of work to get to the point where we are able to add multiple planets to the system. First I had to upgrade the code behind it to something newer: Planet Venus instead of Planet Planet, the main advantage being that it uses multi-threading to fetch the feeds, greatly speeding up that part. And this weekend I had to rewrite all the templates, moving away from the old (and simple) htmltmpl format to one based on Django. Htmltmpl simply cannot deal with our Planet anymore, especially as it has no way to access all config file values, which I need for the translated strings; otherwise I would need one template per language, which would be insane. Various other things also make Django templates way easier to deal with, for example simply saying
  {% firstof var1 var2 var3 "nothing" %}
instead of having a huge if-then-elsif-elsif-lalala chain. And we do have that multiple times.
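
The speed-up from multi-threaded feed fetching is easy to picture with a toy example; this is not Venus's actual code, and the feed URLs are made up:

    # Toy illustration of multi-threaded feed fetching, the main speed
    # advantage of Planet Venus over Planet Planet mentioned above.
    from concurrent.futures import ThreadPoolExecutor
    import feedparser

    FEEDS = [
        "https://example.org/blog/feed.rss",
        "https://example.net/journal/atom.xml",
    ]

    def fetch(url):
        # feedparser downloads and parses the feed in one call.
        parsed = feedparser.parse(url)
        return url, len(parsed.entries)

    # Fetch all feeds in parallel instead of one after another.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for url, count in pool.map(fetch, FEEDS):
            print("%s: %d entries" % (url, count))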
